

Scaling Factorial Hidden Markov Models: Stochastic Variational Inference without Messages

Ng, Yin Cheng, Chilinski, Pawel M., Silva, Ricardo

Neural Information Processing Systems

Factorial Hidden Markov Models (FHMMs) are powerful models for sequential data, but they do not scale well to long sequences. We propose a scalable inference and learning algorithm for FHMMs that draws on ideas from the stochastic variational inference, neural network, and copula literatures. Unlike existing approaches, the proposed algorithm requires no message-passing procedure among latent variables and can be distributed across a network of computers to speed up learning. Our experiments corroborate that the proposed algorithm introduces no further approximation bias compared to the well-established structured mean-field algorithm, and that it achieves better performance with long sequences and large FHMMs.
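To make the message-free training loop concrete, below is a minimal PyTorch sketch of the kind of procedure the abstract describes: subsample a window of the sequence, let a recognition network map the observations in the window to local variational parameters, and take a stochastic gradient step on a rescaled ELBO. This is our own simplified illustration, not the paper's algorithm: it uses a mean-field Bernoulli posterior with relaxed (binary-concrete) samples in place of the copula posterior, applies no correction at the window boundaries, and all model sizes and names are invented.

```python
import torch
import torch.nn as nn

T, K, D, W = 5000, 3, 4, 32             # sequence length, chains, obs dim, window
x = torch.randn(T, D)                    # stand-in observations (hypothetical data)

emit = nn.Linear(K, D)                   # emission mean: additive per-chain effects
trans_logits = nn.Parameter(torch.zeros(K, 2, 2))  # per-chain transition logits
recog = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, K))

opt = torch.optim.Adam(
    [*emit.parameters(), trans_logits, *recog.parameters()], lr=1e-3)

for step in range(1000):
    t0 = torch.randint(0, T - W, (1,)).item()  # subsample a window; no messages
    xw = x[t0:t0 + W]                          # flow outside it
    logits = recog(xw)                         # recognition net -> q(z_t | x_t)
    q = torch.sigmoid(logits)                  # Bernoulli means, shape (W, K)

    # Reparameterised relaxed-Bernoulli sample (temperature 0.5), so the
    # likelihood term gets pathwise gradients.
    u = torch.rand_like(q).clamp(1e-6, 1 - 1e-6)
    z = torch.sigmoid((logits + torch.log(u) - torch.log(1 - u)) / 0.5)

    log_lik = -0.5 * ((xw - emit(z)) ** 2).sum()  # Gaussian emissions

    # E_q[log p(z_{t+1} | z_t)] under the Bernoulli marginals.
    qz = torch.stack([1 - q, q], dim=-1)          # (W, K, 2)
    logA = torch.log_softmax(trans_logits, dim=-1)
    prior = torch.einsum('tka,tkb,kab->', qz[:-1], qz[1:], logA)

    ent = -(q * (q + 1e-8).log() + (1 - q) * (1 - q + 1e-8).log()).sum()

    elbo = (T / W) * (log_lik + prior + ent)      # rescale window to full sequence
    opt.zero_grad()
    (-elbo).backward()
    opt.step()
```

Note the property the abstract emphasises: the gradient for each window depends only on xw, so windows can be farmed out to different machines and the updates combined, with no messages passed among latent variables inside or across windows.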


Reviews: Scaling Factorial Hidden Markov Models: Stochastic Variational Inference without Messages

Neural Information Processing Systems

Technical quality: This paper tackles the problem of inference and learning in factorial HMMs on long sequences and with large latent dimensionality. The primary contribution of the paper is integrating several existing approaches to enable large-scale learning of FHMMs without a loss in modeling performance. The technical details of the components of the approach (the bivariate Gaussian copula variational posterior, the recognition network, the SVI learning procedure) appear to be correct. The experimentation touches on the right points, including the accuracy of the learned models and the scalability of the proposed approach. The accuracy of the learned models is assessed using log-likelihood on held-out test data. The experiments show that the model performs similarly to the SMF approach on both simulated and real (the Bach Chorales) data.
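As a footnote on the copula component the review mentions: a bivariate Gaussian copula lets the variational posterior keep dependence between adjacent binary latent states while the marginals stay Bernoulli. The sketch below is our own illustration of that construction, not code from the paper; the function name and parameters are invented.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def bernoulli_copula_joint(p1, p2, rho):
    """Joint table P(z1, z2) for Bernoulli marginals P(z=1) = p1, p2
    (both strictly between 0 and 1), coupled by a bivariate Gaussian
    copula with correlation rho."""
    # C(u, v) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho), evaluated at the
    # marginal CDF values F(0) = 1 - p.
    c00 = multivariate_normal(mean=[0.0, 0.0],
                              cov=[[1.0, rho], [rho, 1.0]]
                              ).cdf([norm.ppf(1 - p1), norm.ppf(1 - p2)])
    # Rows index z1 in {0, 1}, columns index z2 in {0, 1}.
    return np.array([[c00,             (1 - p1) - c00],
                     [(1 - p2) - c00,  p1 + p2 - 1 + c00]])

print(bernoulli_copula_joint(0.3, 0.6, 0.0))  # ~ outer product of marginals
print(bernoulli_copula_joint(0.3, 0.6, 0.8))  # positive temporal coupling
```

With rho = 0 the table factorises into the product of the marginals, recovering the mean-field posterior; a nonzero rho adds exactly the kind of coupling between adjacent latent states that plain mean-field discards.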

